12 research outputs found

    Comparative analysis of model-based predictive shared control for delayed operation in object reaching and recognition tasks with tactile sensing

    Get PDF
    Communication delay represents a fundamental challenge in telerobotics: on one hand it compromises the stability of teleoperated robots; on the other, it decreases the user's awareness of the designated task. In the scientific literature, this problem has been addressed with both statistical models and neural networks (NNs) for sensor prediction, while keeping the user in full control of the robot's motion. We propose shared control as a tool to compensate for and mitigate the effects of communication delay. Shared control has been proven to enhance precision and speed in reaching and manipulation tasks, especially in the medical and surgical fields. We analyse the effects of added delay and propose a unilateral teleoperated leader-follower architecture that implements both a predictive system and shared control in a 1-dimensional reaching and recognition task with haptic sensing. We propose four control modalities of increasing autonomy: non-predictive human control (HC), predictive human control (PHC), (shared) predictive human-robot control (PHRC), and predictive robot control (PRC). When analysing how the added delay affects the subjects' performance, the results show that HC is very sensitive to the delay: users are unable to stop at the desired position and trajectories exhibit wide oscillations. The degree of autonomy introduced is shown to be effective in decreasing the total time required to accomplish the task. Furthermore, we provide a detailed analysis of environmental interaction forces and the performed trajectories. Overall, the shared control modality, PHRC, represents a good trade-off, achieving peak performance in accuracy and task time, a good reaching speed, and moderate contact with the object of interest.
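
    As an illustration of how the four modalities relate, the sketch below (not the paper's implementation; the constant-velocity predictor, the proportional reaching controller and all gains are assumptions) expresses the follower command for the 1-dimensional task as a blend of the predicted human command and an autonomous reaching command, with a single authority weight selecting HC/PHC, PHRC or PRC.

        # Illustrative Python sketch; names, gains and the predictor are assumptions.

        def predict_human_command(x_cmd_delayed, v_cmd_delayed, delay):
            """Extrapolate the delayed human position command over the known delay
            (constant-velocity prediction)."""
            return x_cmd_delayed + v_cmd_delayed * delay

        def autonomous_command(x_follower, x_goal, k_p=0.8):
            """Proportional controller driving the follower toward the estimated goal."""
            return x_follower + k_p * (x_goal - x_follower)

        def follower_command(x_human, x_robot, alpha):
            """Blend the two commands: alpha = 0 -> HC/PHC, 0 < alpha < 1 -> shared
            PHRC, alpha = 1 -> PRC."""
            return (1.0 - alpha) * x_human + alpha * x_robot

        # Example: shared modality (PHRC) with a 0.3 s delay and equal authority.
        x_h = predict_human_command(x_cmd_delayed=0.10, v_cmd_delayed=0.05, delay=0.3)
        x_r = autonomous_command(x_follower=0.08, x_goal=0.20)
        x_next = follower_command(x_h, x_r, alpha=0.5)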

    In situ 4D tomography image analysis framework to follow sintering within 3D-printed glass scaffolds

    Get PDF
    We propose a novel image analysis framework to automate the analysis of X-ray microtomography images of sintering ceramics and glasses, using open-source toolkits and machine learning. Additive manufacturing (AM) of glasses and ceramics usually requires sintering of green bodies. Sintering causes shrinkage, which presents a challenge for controlling the metrology of the final architecture. Therefore, being able to monitor sintering in 3D over time (termed 4D) is important when developing new porous ceramics or glasses. Synchrotron X-ray tomographic imaging allows in situ, real-time capture of the sintering process at both micro and macro scales using a furnace rig, facilitating 4D quantitative analysis of the process. The proposed image analysis framework is capable of tracking and quantifying the densification of glass or ceramic particles within multiple volumes of interest (VOIs), along with structural changes over time, using 4D image data. The framework is demonstrated by 4D quantitative analysis of bioactive glass ICIE16 within a 3D-printed scaffold. Here, the densification of glass particles within three VOIs was tracked and quantified, along with the diameter change of struts and the interstrut pore size over the 3D image series, delivering new insights into the sintering mechanism of ICIE16 bioactive glass particles at both micro and macro scales.
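
    As a rough illustration of the kind of per-VOI quantification described here, the sketch below tracks the solid volume fraction inside fixed VOIs across a 4D (3D + time) microCT series using NumPy and scikit-image; the Otsu threshold and the box-shaped VOIs are assumptions, not necessarily the framework's actual segmentation choices.

        # Rough sketch: densification as solid volume fraction per VOI over time.
        # Otsu thresholding and box-shaped VOIs are assumptions.
        import numpy as np
        from skimage.filters import threshold_otsu

        def solid_fraction(volume):
            """Fraction of voxels above an Otsu threshold (glass/ceramic phase)."""
            return float((volume > threshold_otsu(volume)).mean())

        def track_densification(series, vois):
            """series: sequence of 3D arrays, one per time step.
            vois: dict mapping a VOI name to a (slice_z, slice_y, slice_x) tuple."""
            history = {name: [] for name in vois}
            for volume in series:
                for name, (sz, sy, sx) in vois.items():
                    history[name].append(solid_fraction(volume[sz, sy, sx]))
            return history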

    Facial Expression Rendering in Medical Training Simulators: Current Status and Future Directions

    Get PDF
    Recent technological advances in robotic sensing and actuation methods have prompted the development of a range of new medical training simulators with multiple feedback modalities. Learning to interpret the facial expressions of a patient during medical examinations or procedures has been one of the key focus areas in medical training. This paper reviews the facial expression rendering systems in medical training simulators reported to date. Facial expression rendering approaches in other domains are also summarized so that knowledge from those works can be incorporated into developing systems for medical training simulators. Classifications and comparisons of medical training simulators with facial expression rendering are presented, and important design features, merits and limitations are outlined. Medical educators, students and developers are identified as the three key stakeholders involved with these systems, and their considerations and needs are presented. Physical-virtual (hybrid) approaches provide multimodal feedback, render facial expressions accurately, and can simulate patients of different age, gender and ethnic groups, making them more versatile than purely virtual or physical systems. The overall findings of this review and the proposed future directions are beneficial to researchers interested in initiating or developing such facial expression rendering systems in medical training simulators. This work was supported by the Robopatient project funded by EPSRC Grant No EP/T00519X/

    Detection and tracking volumes of interest in 3D printed tissue engineering scaffolds using 4D imaging modalities.

    Get PDF
    Additive manufacturing (AM) platforms allow the production of patient-specific tissue engineering scaffolds with desirable architectures. Although AM platforms offer exceptional control over architecture, post-processing methods such as sintering and freeze-drying often deform the printed scaffold structure. In situ 4D imaging can be used to analyze changes that occur during post-processing. Visualization and analysis of changes in selected volumes of interest (VOIs) over time are essential to understand the underlying mechanisms of scaffold deformation. Yet, automated detection and tracking of VOIs in 3D-printed scaffolds over time using 4D image data is currently an unsolved image processing task. This paper proposes a new image processing technique to segment, detect and track volumes of interest in 3D-printed tissue engineering scaffolds. The method is validated using 4D synchrotron-sourced microCT image data captured during the in situ sintering of bioactive glass scaffolds. The proposed method will contribute to the development of patient-specific scaffolds with controllable designs and optimum properties.
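
    One simple way to realise a detect-and-track step of this kind, sketched below with scikit-image, is to label connected solid regions in each 3D frame and link them across time by nearest-centroid matching; the thresholding, minimum region size and matching distance are illustrative assumptions rather than the paper's exact method.

        # Sketch: detect regions per 3D frame, then link them greedily across time.
        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.measure import label, regionprops

        def detect_regions(volume, min_voxels=500):
            """Centroids of connected solid regions larger than min_voxels."""
            mask = volume > threshold_otsu(volume)
            return [np.array(r.centroid) for r in regionprops(label(mask))
                    if r.area >= min_voxels]

        def track_regions(series, max_dist=10.0):
            """Link each region to the nearest centroid in the next frame."""
            tracks = [[c] for c in detect_regions(series[0])]
            for volume in series[1:]:
                centroids = detect_regions(volume)
                for track in tracks:
                    if not centroids:
                        break
                    dists = [np.linalg.norm(track[-1] - c) for c in centroids]
                    best = int(np.argmin(dists))
                    if dists[best] <= max_dist:
                        track.append(centroids.pop(best))
            return tracks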

    DeforMoBot: a bio-inspired deformable mobile robot for navigation among obstacles

    No full text
    Many animals can move in cluttered environments by conforming their body shape to geometric constraints in their surroundings, such as narrow gaps. Most robots are rigid structures and do not possess these capabilities. Navigation around movable or compliant obstacles results in a loss of efficiency, and possibly mission failure, compared to progression through them. In this paper, we propose the novel design of a deformable mobile robot: it can adopt a wider stance for greater stability (and possibly a higher payload capacity), or a narrower stance to fit through small gaps and progress through flexible obstacles. We use a whisker-based feedback control approach to match the robot's deformation to the compliance of the obstacle. We present a real-time algorithm that uses whisker feedback to perform shape adjustment in uncalibrated environments. The developed robot was tested navigating among obstacles with varying physical properties from different approach angles. Our results highlight the importance of co-developing environment perception and physical reaction capabilities for improved performance of mobile robots in unstructured environments.
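
    A minimal sketch of the kind of whisker-driven shape adjustment described here might look as follows: the stance narrows when whisker deflection indicates a stiff, resisting obstacle and relaxes back toward its widest setting otherwise. The gain, deflection reference and width limits are assumptions, not the paper's reported parameters.

        # Sketch of whisker-driven stance adjustment; all constants are assumptions.
        def update_stance(width, whisker_deflection,
                          width_min=0.15, width_max=0.30,
                          k=0.05, deflection_ref=0.02):
            """Proportional update of stance width (m) from whisker deflection (rad):
            deflection above the reference (stiff obstacle) narrows the stance,
            deflection below it lets the stance widen again."""
            width -= k * (whisker_deflection - deflection_ref)
            return min(max(width, width_min), width_max)

        # Example: run one control step with the current whisker reading.
        new_width = update_stance(width=0.30, whisker_deflection=0.05)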

    Development of wearable fingertip tactile display driven by Bowden cables

    No full text
    This paper presents the development and human-interaction evaluation of a Bowden-cable-based wearable fingertip tactile display. The device is designed for use in virtual reality and teleoperation to render different types of tactile sensations, such as grip force, slipping, roughness and softness, by delivering normal force, skin stretch, tangential movement and vibration cues to the user. The paper evaluates, through user testing, the proposed device's capability to deliver individual taxel actuation. A four-taxel actuation system fixed to a mild-steel skeleton is covered in silicone rubber to ensure wearer comfort. A secondary mechanism provides sliding and lateral skin-stretch sensations, and an 8 mm-diameter piezo vibration motor delivers vibration to indicate slipping to the user. The force feedback system consists of four independently operable taxels positioned at a 2 mm center-to-center distance on the fingertip. Each taxel is actuated via a Bowden cable connected to a geared DC motor mounted on a sleeve worn on the lower arm. A taxel discrimination experiment was conducted to validate participants' ability to discriminate each taxel; the results showed that a healthy human can distinguish each taxel with a mean accuracy of 87.45%.
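
    For context on how the reported figure could be obtained, the short sketch below computes per-taxel and mean discrimination accuracy from (actuated taxel, reported taxel) trial pairs; the data format is an assumption, not the study's actual logging scheme.

        # Sketch: per-taxel and mean discrimination accuracy from trial pairs.
        from collections import defaultdict

        def discrimination_accuracy(trials):
            """trials: iterable of (actuated_taxel, reported_taxel) index pairs."""
            hits, counts = defaultdict(int), defaultdict(int)
            for actuated, reported in trials:
                counts[actuated] += 1
                hits[actuated] += int(actuated == reported)
            per_taxel = {t: hits[t] / counts[t] for t in counts}
            return per_taxel, sum(per_taxel.values()) / len(per_taxel)

        # Example: four trials over taxels 0-3.
        acc, mean_acc = discrimination_accuracy([(0, 0), (1, 1), (2, 3), (3, 3)])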

    A Novel fabrication method for rapid prototyping of soft structures with embedded pneumatic channels

    No full text
    Soft robotics is a major disruptive technology that is rapidly revolutionizing the world of robotics. As the design optimization of soft robotic structures is still in its infancy, designers have to resort to prototype testing. This paper describes how a novel casting method based on a 2D layered approach, together with thermal programming of pneumatic tubing, can be used to simplify soft structure prototyping. The proposed casting method is based on the sequential stacking of laser-cut pre-fabricated plates, i.e. PMMA (acrylic) sheets, to create a 3D mold, instead of traditional methods of fabricating 3D molds such as CNC machining or 3D printing. Contemporary soft robotic applications mostly rely on pneumatic actuation and thus require pneumatic channels embedded within the structure. Creating these channels is a critical factor that limits the fabrication scope of most such soft structures. A simple solution is to use polyurethane (PU) tubing to create the channels. A limitation of PU tubes is that they cannot be embedded directly: any twist added to obtain the required tube path strains the soft structure from within, which can affect the desired operation. Hence, the authors propose removing this strain by thermally programming the required shape onto the PU tube. PU tubes reinforced with copper cores are bent into the desired shape and heat-treated to program it. After placing the programmed tubes within the mold, silicone rubber can simply be poured in, and the finished structure can be removed from the mold once cured. The main purpose of this paper is to present these two novel fabrication methods, which simplify soft robotic prototyping without the need for advanced, costly and complex equipment.

    Meal assistance robots: a review on current status, challenges and future directions

    No full text
    The need for assistive robots to perform activities of daily living is increasing with the shrinking labor force in welfare and nursing care. Self-feeding, or eating, is one of the primary activities of daily life. Devices such as assistive robots for self-feeding have been developed as a solution to this problem. The purpose of this paper is to review existing meal assistance robots. Important design features, such as feeding techniques, the advantages and limitations of control methods, and the different input signals used by meal assistance robots, are comprehensively discussed. Challenges in developing meal assistance robots and potential future directions are also discussed at the end.
